


https://mondoweiss.net/2024/09/israel-is-joining-the-first-global-ai-convention-heres-why-thats-dangerous/

Israel is joining the first global AI convention, here’s why that’s dangerous

Over the last year, Israel has weaponized AI in its genocide in Gaza, deploying AI-driven surveillance and automated targeting systems that have killed tens of thousands. Israel’s participation in the first global AI treaty raises serious questions.
By Mona Shtaya September 16, 2024
Destruction in the vicinity of al-Shifa Hospital, Gaza City, April 2, 2024. (Photo: © Omar Ishaq/dpa via ZUMA Press/APA Images)

It is deeply troubling that Israel has been allowed to join the first global treaty on artificial intelligence (AI)—an agreement meant to regulate AI’s responsible use while upholding human rights, democracy, and the rule of law. For 11 consecutive months, Israel has weaponized AI in its genocide in Gaza, deploying AI-driven surveillance and automated targeting systems that have inflicted devastating civilian harm. Yet, Israel is now celebrating its participation in this treaty alongside the U.S., U.K., and EU, after spending two years at the negotiating table and helping draft the first international AI treaty for ethical AI governance. This contradiction exposes glaring hypocrisy and raises serious questions about the international community’s true commitment to accountability.

This treaty applies primarily to public sector AI but also addresses private sector risks. The signatories of the treaty agree to uphold principles like transparency, accountability, and non-discrimination, and commit to establishing remedies for AI-related human rights violations. The treaty mandates risk assessments, mitigation measures, and graded obligations based on specific contexts, ensuring flexibility in its application.

Since the beginning of this ongoing genocide, Israel has been weaponizing AI and advanced technologies to carry out the massive and indiscriminate killing of civilians. The Apartheid state has harnessed AI for surveillance, targeting, and decision-making. Israel has intensified its efforts to control and oppress the people in the Gaza Strip, continuing a long history of systematic oppression of the Palestinian people. This misuse of technology raises profound concerns, leading to devastating consequences for innocent lives caught in the crossfire.

Unlike the “AI for the common good” agenda outlined in global AI treaties, Israel’s AI program “Lavender” has emerged as a dark centerpiece of the ongoing genocide in Gaza. Just two weeks into the war, Lavender’s kill lists were automatically approved, targeting suspected militants—including many low-ranking members of Hamas and Palestinian Islamic Jihad—with minimal human oversight. In the early weeks of the war, Lavender flagged up to 37,000 Palestinians and their homes as bombing targets. This weaponization of AI has led to devastating civilian casualties, as Lavender’s broad and error-prone criteria resulted in indiscriminate attacks on homes, causing a horrific death toll. Unlike other systems such as “The Gospel,” which targets buildings, Lavender focuses on individuals, magnifying the tragedy of its missteps.

While the international treaty champions the responsible use of AI, upholding human rights and the rule of law, Israel’s AI system “Habsora,” or “The Gospel,” stands in stark contrast. Deployed since the onset of the Gaza war, Habsora acts as a highly automated target generation tool for Israeli military operations. Capable of swiftly producing target lists, it facilitates extensive strikes on residential homes, including those of low-ranking Hamas members. Since October 7, Habsora has led to significant civilian casualties, with strikes often hitting homes without confirmed militant presence. The system’s broad targeting criteria and minimal oversight have resulted in massive destruction, as well as the erasure of the Gaza Strip’s geographic features, and the annihilation of its people.

In the two years prior to this war, while the surveillance state was helping draft the Convention’s agenda, Israel was not idle; it was systematically automating its apartheid system. From the so-called ‘smart shooter’ in Hebron to facial recognition technologies, it has weaponized groundbreaking technologies to target and kill Palestinians.

Between 2020 and 2021, investigations revealed Israel’s increasing reliance on advanced surveillance and predictive technologies to control Palestinians. This digital oversight, part of a broader strategy, operates both as a means of repression and as a commercial venture, with Israel testing its surveillance techniques on Palestinians before marketing them to repressive regimes globally. The surveillance state relies on extensive monitoring, including facial recognition and automated tracking systems like “Blue Wolf” and “Red Wolf,” to intrude on Palestinian daily life. These technologies, coupled with Israel’s control over the Information and Communication Technology (ICT) infrastructure, intensify the sense of constant scrutiny, infringing on privacy and stifling freedom of expression. They also threaten Palestinians’ access to the Internet, which the oppressive state shuts down whenever it decides to cover up its war crimes. This approach not only reinforces the Automated Apartheid but also serves as a troubling example of how AI, among other technologies, has been weaponized to serve the securitization and militarization of an Apartheid state, which has now been handed, on a golden platter, the opportunity to shape and join the first AI treaty.

Israel’s participation in the first global AI treaty, intended to promote ethical AI use and uphold human rights, starkly contrasts with its actual practices. For months, Israel has used advanced AI systems like “Lavender” and “Habsora” to target and kill civilians in Gaza, all while celebrating its role in drafting a treaty that claims to ensure responsible AI governance. This contradiction not only exposes a troubling hypocrisy but also raises serious questions about the international community’s commitment to genuine accountability. As Israel continues to exploit AI for oppression, it undermines the very principles the treaty was meant to uphold. The world must scrutinize this disparity and hold Israel to account to prevent the abuse of technology and protect human rights globally.
